Patent abstract:
The present invention provides techniques for peer-to-peer Peripheral Component Interconnect Express (PCIe) storage transfers. In some embodiments, the techniques may be embodied as a method for performing peer-to-peer storage transfers between PCIe devices, including placing, in memory of a first PCIe device, a queue for data communicated between the first PCIe device and a target PCIe device, receiving, at the first PCIe device, queue memory allocation information transmitted by a host device communicatively coupled to the first PCIe device and the target PCIe device, and generating, by means of a computer processor of the first PCIe device, a storage command.
Publication number: FR3020886A1
Application number: FR1554030
Filing date: 2015-05-05
Publication date: 2015-11-13
Inventors: Colin Christopher Mccambridge; Christopher William Burr; Adam Christopher Geml
Applicant: HGST Netherlands BV
IPC primary class:
Patent description:

[0001] SYSTEM AND METHOD FOR PEER-TO-PEER PCIe STORAGE TRANSFERS

Background

[0001] The Non-Volatile Memory Express (NVMe) specification is a specification for accessing solid state drives (SSDs) and other target devices attached via a Peripheral Component Interconnect Express (PCIe) bus. The NVMe specification defines a command interface based on a single set of administrative submission and completion queues and many sets of input/output (I/O) submission and completion queues. Administrative queues are used for tasks such as creating and deleting queues, querying device status, and configuring features, while I/O queues are used for storage-related transfers, such as block reads and writes. However, the NVMe specification depends on host resources for command and control to a degree that may create a bottleneck, or choke point, in system performance.

[0002] According to the NVMe specification, only the host CPU of a system is capable of sending storage commands to an NVMe controller. In addition, the PCI Express system architecture faces two typical performance constraints. First, typical PCI Express fabrics serving many devices (such as an enterprise storage backplane) have a total upstream bandwidth (between a PCI Express switch and the host) that is lower than the total downstream bandwidth (between that same PCI Express switch and all attached storage controllers). This represents an excess of bandwidth downstream of the switch, which cannot be fully utilized when traffic is only permitted between the host and an endpoint NVMe controller. Second, in a system that allows only the host to generate storage traffic to all controllers, the host's resources (especially CPU compute power and dynamic random access memory (DRAM) storage) become a bottleneck limiting the overall performance of the system. The overall latency and throughput of the system are tied to the capabilities of the host. The latency problem is particularly detrimental to applications such as high performance computing platforms, in which a computing device such as a graphics processing unit (GPU) wishes to access a large amount of data on a storage medium but cannot do so without the host acting as an intermediary, first initiating storage transfers between the drive and the host's DRAM, and then initiating further memory transfers, in the other direction, between the host's DRAM and the GPU.

[0003] Earlier attempts to solve these problems involved proprietary, vendor-specific solutions that did not address the problem of accessing an off-the-shelf NVMe controller. Devices not compatible with these proprietary, vendor-specific solutions could therefore not generate such traffic, and the solutions were not compatible with the NVM Express protocol, since under the NVM Express protocol only the host of the system is allowed to generate traffic. In view of the above, it can be understood that there are significant problems and shortcomings associated with current technologies for peer-to-peer PCIe storage transfers.

SUMMARY OF THE INVENTION

[0005] The invention relates to techniques for peer-to-peer PCIe storage transfers.
In some embodiments, the techniques may be implemented as a method for performing peer-to-peer storage transfers between Peripheral Component Interconnect Express (PCIe) devices, including placing, in memory of a first PCIe device, a queue for data communicated between the first PCIe device and a target PCIe device, receiving, at the first PCIe device, queue memory allocation information transmitted by a host device communicatively coupled to the first PCIe device and the target PCIe device, and generating, by means of a computer processor of the first PCIe device, a storage command.

[0006] According to additional aspects of this exemplary embodiment, peer-to-peer storage transfers may include storage transfers to or from a target device conforming to the Non-Volatile Memory Express (NVMe) specification. In other aspects of this exemplary embodiment, the queue may be assigned to a Peripheral Component Interconnect Express (PCIe) memory region allocated by a PCIe enumerator during the initialization period of the first PCIe device.

[0008] According to additional aspects of this exemplary embodiment, the queue may be an input/output (I/O) submission queue for communication of a storage command to the target PCIe device.

[0009] In other aspects of this exemplary embodiment, the queue may be an input/output (I/O) completion queue for receiving from the target PCIe device an indication of storage command completion. According to additional aspects of this exemplary embodiment, the techniques may also include setting up, in the memory of the first PCIe device, a second queue for data communicated between the first PCIe device and the target PCIe device, and receiving, at the first PCIe device, queue memory allocation information from the host device for the second queue. According to additional aspects of this exemplary embodiment, the second queue may comprise at least one of the following queues: an input/output (I/O) submission queue for communication of a storage command to the target PCIe device, and an input/output (I/O) completion queue for receiving from the target PCIe device a storage command completion indication. In other aspects of this exemplary embodiment, the techniques may also include placing a data buffer in the memory of the first PCIe device. According to additional aspects of this exemplary embodiment, a queue may be set up in the memory of the host device for at least one of the following: administrative submission, administrative completion, I/O submission, and I/O completion. In other aspects of this exemplary embodiment, the techniques may also include placing a data buffer in the memory of the host device. In other aspects of this exemplary embodiment, the techniques may also include determining a number of queues to be used on the first PCIe device and a number of queues to be used on the host device, based on one or more of the following factors: the amount of memory available on the first PCIe device, the amount of memory available on the host device, the level of use of the host device, the level of use of the first PCIe device, the amount of available bandwidth between a Peripheral Component Interconnect Express (PCIe) switch and the host device, the amount of available bandwidth between the PCIe switch and the first PCIe device, and the amount of available bandwidth between the PCIe switch and the target PCIe device.
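By way of illustration, the following C sketch shows one plausible shape for the queue memory allocation information a host might hand to the first PCIe device (the co-host) after creating a delegated queue pair on its behalf. The struct and field names are illustrative assumptions, not taken from the NVMe specification or from this patent; only the kinds of information carried (queue addresses, doorbell addresses, interrupt routing) come from the text above.

```c
#include <stdint.h>

/* Hypothetical descriptor a host could pass to a co-host after creating
 * a delegated I/O queue pair on the co-host's behalf. All names are
 * illustrative assumptions. */
struct delegated_queue_alloc {
    uint64_t sq_base;      /* I/O submission queue base, in co-host BAR space */
    uint64_t cq_base;      /* I/O completion queue base, in co-host BAR space */
    uint16_t queue_id;     /* queue identifier assigned by the host */
    uint16_t queue_depth;  /* number of entries in each queue */
    uint64_t sq_doorbell;  /* submission queue tail doorbell, in target BAR space */
    uint64_t cq_doorbell;  /* completion queue head doorbell, in target BAR space */
    uint64_t msix_addr;    /* MSI-X message address, in co-host BAR space */
    uint32_t msix_data;    /* MSI-X message data for the delegated vector */
};
```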
According to other aspects of this exemplary embodiment, the techniques may also include setting up, in the memory of a third PCIe device, a second queue for data communicated between the first PCIe device and the target PCIe device, and receiving, at the first PCIe device, queue memory allocation information from the host device for the second queue.

[0017] According to additional aspects of this exemplary embodiment, the techniques may also include initiating, by the first PCIe device, a storage transfer command by storing the generated storage transfer command in an I/O submission queue, issuing a peer-to-peer memory write to a doorbell of the target PCIe device, receiving, at the first PCIe device, a memory read from the target PCIe device to retrieve the storage transfer command, and transferring data between the first PCIe device and the target PCIe device. According to additional aspects of this exemplary embodiment, the techniques may also include receiving a completion indication written by the target PCIe device into an I/O completion queue, receiving an interrupt from the target PCIe device, retrieving, by the first PCIe device, the completion from the I/O completion queue, and updating a doorbell of the target PCIe device.

According to other aspects of this exemplary embodiment, the target PCIe device may conform to MSI-X (Message Signaled Interrupts eXtended), and the interrupt may be sent by the target PCIe device to an address in the memory of the first PCIe device (for example, an address in a PCIe memory map such as the PCIe BAR space of the first PCIe device). In some embodiments, memory may not be required in the first PCIe device for an interrupt, and hardware processing logic may instead be triggered by the PCIe memory write transaction layer packet (TLP) of the MSI-X interrupt.

According to other aspects of this exemplary embodiment, the target PCIe device may not conform to MSI-X (Message Signaled Interrupts eXtended), and the interrupt may be sent by the target PCIe device to the host, with the host relaying the interrupt to an address in the memory of the first PCIe device (for example, an address in a PCIe memory map such as a PCIe BAR space of the first PCIe device). In some embodiments, memory may not be required in the first PCIe device for an interrupt, and hardware processing logic may instead be triggered by the PCIe memory write transaction layer packet (TLP) of the MSI-X interrupt.

According to additional aspects of this exemplary embodiment, the host device may comprise at least one of the following: an enterprise server, a database server, a workstation, and a computer. According to additional aspects of this exemplary embodiment, the target PCIe device may comprise at least one of the following: a graphics processing unit, an audio/video capture card, a hard disk, a host bus adapter, and a Non-Volatile Memory Express (NVMe) controller. In some embodiments, the target device may be an NVMe compliant device.

In some embodiments, the techniques for peer-to-peer PCIe storage transfers may be embodied as a computer program product comprising a series of instructions executable on a computer, the computer program product executing a process for peer-to-peer Peripheral Component Interconnect Express (PCIe) storage transfers.
The computer program may implement the steps of placing, in the memory of a first PCIe device, a queue for data communicated between the first PCIe device and a target PCIe device, receiving, at the first PCIe device, queue memory allocation information transmitted by a host device communicatively coupled to the first PCIe device and the target PCIe device, and generating, by means of a computer processor of the first PCIe device, a storage command.

In some embodiments, the techniques for peer-to-peer PCIe storage transfers may be embodied as a system for peer-to-peer Peripheral Component Interconnect Express (PCIe) storage transfers. The system may include a host device, a first Peripheral Component Interconnect Express (PCIe) device, a target PCIe device, and a PCIe switch communicatively coupling the first PCIe device, the target PCIe device, and the host. In some embodiments, a PCIe root complex feature or the PCIe fabric itself may provide the connectivity instead of a PCIe switch. The first PCIe device may include stored Non-Volatile Memory Express (NVMe) storage command submission instructions, the instructions containing an address of a queue located in the memory of the first PCIe device for I/O submission, and instructions for generating an NVMe command.

The present invention will now be described in more detail with reference to its exemplary embodiments, shown in the accompanying drawings. While the present invention is described below with reference to the exemplary embodiments, it should be understood that the present invention is not limited thereto. Those skilled in the art having access to the teachings herein will recognize additional implementations, modifications, and embodiments, as well as other fields of use, which are within the scope of the invention as described herein, and for which the present invention may be of significant utility.

Brief Description of the Drawings

[0026] To provide a more complete understanding of the present invention, reference is now made to the accompanying drawings, in which similar elements are referenced by the same numerals. These drawings should not be considered limiting; they are presented by way of example only.

Figure 1 shows an exemplary block diagram showing a plurality of PCIe devices in communication with a host device, according to an embodiment of the present invention. Figure 2 shows an exemplary block diagram showing a plurality of PCIe devices in communication with a host device, according to an embodiment of the present invention. Figure 3 shows an exemplary module for peer-to-peer PCIe storage transfers, according to an embodiment of the present invention. Figure 4 shows a flowchart illustrating peer-to-peer PCIe storage transfers, according to an embodiment of the present invention. Figure 5 shows a flowchart illustrating peer-to-peer PCIe storage transfers, according to an embodiment of the present invention. Figure 6 shows a table of exemplary parameters used to communicate device capabilities between a host and a co-host, according to an embodiment of the present invention. Figure 7 shows a table of exemplary completion queue parameters, according to an embodiment of the present invention. Figure 8 shows a table of exemplary submission queue parameters, according to an embodiment of the present invention.
Figure 9 shows a table of exemplary completion queue creation parameters, according to an embodiment of the present invention. Figure 10 shows a table of exemplary submission queue creation parameters, according to an embodiment of the present invention. Figure 11 shows a table of exemplary interrupt vector configuration parameters, according to an embodiment of the present invention.

Description:

The present invention relates to peer-to-peer PCIe storage transfers. According to the NVMe specification, only the host CPU of a system is capable of sending storage commands to an NVMe controller. Embodiments of the present invention provide systems and methods by which the host of a system can delegate some of its authority to a PCI Express device (i.e., a co-host), such that the co-host can send storage commands to the NVMe controller autonomously. In some embodiments, storage commands may be sent to the NVMe controller with limited host interaction, or without further interaction by the host. This addresses two typical performance constraints of a PCI Express system architecture and provides higher performance across many metrics. First, typical PCI Express fabrics serving many devices (such as an enterprise storage backplane) have a total upstream bandwidth (between a PCI Express switch and the host) that is lower than the total downstream bandwidth (between that same PCI Express switch and all attached storage controllers). This represents an excess of bandwidth downstream of the switch, which cannot be fully utilized when traffic is only permitted between the host and an endpoint NVMe controller. Second, in a system that allows only the host to generate storage traffic to all controllers, the host's resources (especially CPU compute power and dynamic random access memory (DRAM) storage) are a bottleneck restricting the overall performance of the system. The overall latency and throughput of the system are tied to the capabilities of the host. Embodiments of the present invention may allow peer-to-peer traffic between devices downstream of the switch and may allow fuller use of the excess bandwidth downstream of the switch, dramatically increasing the performance of the system in all kinds of applications. Embodiments of the invention reduce or eliminate the involvement of a host in peer-to-peer PCIe storage transfers and allow a device such as a GPU to directly invoke storage transfers between the NVMe controller and itself over the shared PCI Express fabric. Potential applications may include, for example, self-assembling storage arrays, peer-to-peer high performance computing applications, peer-accelerated caching, and peer-accelerated defragmentation. Peer-to-peer PCIe storage transfer techniques are discussed in more detail below.

[0041] Reference is now made to the drawings, in which Figure 1 is an exemplary block diagram showing a PCIe device in communication with a host device, according to an embodiment of the present invention. Figure 1 includes a number of computing technologies such as a host system 102, a host CPU 104, and a PCI Express root complex 106. The PCI Express switch 108 may communicatively couple a plurality of targets (e.g., PCIe devices such as NVMe targets), such as targets 110, 116, and 122, to the host system 102 through the PCI Express root complex 106. The target 110 may contain an NVMe controller 112 and nonvolatile storage 114.
The target 116 may contain an NVMe controller 118 and nonvolatile storage 120. The target 122 may contain an NVMe controller 124 and nonvolatile storage 126. The system memory 128 may contain memory-based resources accessible to the host system 102 through a memory interface (e.g., double data rate type three synchronous dynamic random access memory (DDR3 SDRAM)). The system memory 128 may take any suitable form such as, but not limited to, solid state memory (e.g., flash memory or a solid state device (SSD)), optical memory, or magnetic memory. The system memory 128 may be volatile or non-volatile memory. As illustrated in Figure 1, the system memory 128 may contain one or more data structures such as, for example, administrative submission queues 130, administrative completion queues 132, I/O submission queues 134, I/O completion queues 136, and data buffers 138.

The connection 140 between the PCI Express root complex 106 and the host CPU 104 may be a high bandwidth interconnect such as, for example, an Intel QuickPath Interconnect (QPI). The connection 142 between the PCI Express root complex 106 and the PCI Express switch 108 may have limited bandwidth (e.g., a PCI Express interface providing 32 lanes). The connections 144, 146, and 148 may also have limited bandwidth (e.g., PCI Express interfaces providing 8 lanes each). Only the connections 144, 146, and 148 are illustrated, but it can be understood that the number of targets connected to the PCI Express switch 108 may be smaller or substantially larger (e.g., 96 devices). As the number of targets connected to the PCI Express switch 108 increases, the bandwidth at the connection 142 may become a choke point.

According to some embodiments, interface standards other than PCIe may be used for one or more portions, including, but not limited to, Serial Advanced Technology Attachment (SATA), Advanced Technology Attachment (ATA), Small Computer System Interface (SCSI), PCI eXtended (PCI-X), Fibre Channel, Serial Attached SCSI (SAS), Secure Digital (SD), Embedded Multi-Media Card (EMMC), and Universal Flash Storage (UFS).

The host system 102 may take any suitable form such as, but not limited to, an enterprise server, a database host, a workstation, a personal computer, a mobile phone, a gaming device, a personal digital assistant (PDA), an e-mail/text messaging device, a digital camera, a digital media (e.g., MP3) player, a GPS navigation device, and a television system. The host system 102 and the target device may contain additional components, which are not shown in Figure 1 to simplify the drawing. In addition, in some embodiments, not all of the illustrated components may be present. Furthermore, the various controllers, blocks, and interfaces may be implemented in any suitable fashion. For example, a controller may take the form of one or more of a microprocessor or processor and a computer readable medium that stores computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, and an embedded microcontroller, for example.

The NVMe specification defines a command interface based on a single set of administrative submission and completion queues (e.g., administrative submission queues 130 and administrative completion queues 132) and many sets of I/O submission and completion queues (e.g., I/O submission queues 134 and I/O completion queues 136).
Administrative queues are used for tasks such as creating and deleting queues, querying device status, and configuring features, while I/O queues are used for all storage-related transfers, such as block reads and writes. The NVMe specification is designed such that a single host controls and manages each of these resources. As described below in more detail with reference to Figure 2, embodiments of the invention relate to systems and methods in which one or more sets of I/O queues may instead be owned by another solid state device connected to a host (e.g., a PCIe device). This can keep much of the storage network traffic away from the host and below a choke point (e.g., a connection between a PCIe switch and a host). In addition, embodiments of the invention may provide the ability for a solid state device other than the host (e.g., a PCIe device) to instantiate, process, and/or receive one or more storage-related commands. This instantiation, processing, and/or receipt of commands can reduce host CPU utilization, even when the queues reside in system memory.

[0050] Figure 2 shows an exemplary block diagram showing a plurality of PCIe devices in communication with a host device, according to an embodiment of the present invention. As shown in Figure 2, the host system 102, PCI Express switch 108, target 116, and target 122 are as described with reference to Figure 1 above. Administrative submission queues 130 and administrative completion queues 132 also remain as described with reference to Figure 1 above. As illustrated in Figure 2, the co-host 202 may be a solid state device (e.g., a PCIe device) having a PCIe device 204 as well as I/O submission queues 208, I/O completion queues 210, and data buffers 206.

[0051] As illustrated, moving one or more data structures such as, for example, I/O queues may allow more storage traffic (e.g., NVMe traffic) to remain below the PCI Express switch 108 and away from the host system 102. As illustrated in Figure 2, one or more storage management commands and/or data structures may be delegated to the co-host 202, which may allow it to autonomously generate NVMe traffic for an NVM Express controller connected to the same PCI Express fabric in peer-to-peer fashion. This may allow peer-to-peer storage transfers between a co-host 202 (e.g., a PCI Express device) wishing to generate traffic and an off-the-shelf NVMe controller (e.g., target 116 and target 122) that adheres to the NVMe specification. The co-host 202 may be any PCI Express device generating traffic for any purpose, such as a graphics processing unit, an audio/video capture card, another NVMe controller, or any other PCI Express-attached device. In some embodiments, performing autonomous peer-to-peer storage transfers can be accomplished by delegating queues and mapping that delegation onto existing PCI Express protocol primitives. As noted above, the NVMe specification defines a command interface based on a single set of administrative submission and completion queues and on many sets of I/O submission and completion queues. Administrative queues are used for tasks such as creating and deleting queues, querying device status, and configuring features, while I/O queues are used for all storage-related transfers, such as block reads and writes.
The NVMe specification is designed such that a single host controls and manages each of these resources. As illustrated in Figure 2, one or more sets of I/O queues may instead be owned by a co-host 202 (e.g., a PCI Express device). The co-host 202 can use these I/O queues (e.g., the I/O submission queues 208 and the I/O completion queues 210) to autonomously generate NVMe traffic for a given NVMe controller (e.g., target 116 and target 122).

In some embodiments, the host system 102 may create an I/O queue in the memory of the co-host 202. The backing storage for these queues may be a memory range located within an allocated BAR space of the co-host 202. BAR spaces are PCI Express memory regions assigned by the PCI enumerator during the device initialization period, and they are accessible to all devices in a PCI Express fabric. The I/O queues of the co-host 202 may use ordinary doorbell addresses located within the BAR space of a target (e.g., memory addresses in the BAR space of the target 116 and/or the target 122), and may be assigned an MSI-X interrupt address that is also inside an allocated BAR space of the co-host 202. After creating a queue, the host system 102 can communicate the details of the memory allocation to the co-host 202. The co-host 202 can then execute commands autonomously with respect to the host system 102, using the normal NVMe protocol. According to some embodiments, the queue memory allocation information may instead be determined by the co-host 202 and sent by the co-host 202 to the host system 102.

Queue memory allocations may be carefully selected parameters that map one or more queuing operations onto resources actually controlled by the co-host 202. Specifically, a queue requires storage to hold queue entries, a "doorbell" BAR address on the target to indicate to the target that new entries are available in a submission queue or have been consumed from a completion queue, and an interrupt vector for the target to indicate to the queue owner that new entries have been added by the target to a completion queue. Finally, the actual command represented in the queue may require some backing storage to or from which data is transferred when the command operation is executed.

[0054] To submit a command in this instantiation, the co-host 202 may fill in a submission entry in memory for an I/O submission queue 208 (e.g., in the BAR memory). After submitting an entry in the I/O submission queue 208, the co-host 202 may issue a peer-to-peer memory write to update the target's doorbell register for that queue. For example, if the co-host 202 instantiates a storage command for the target 116, the co-host 202 can update a memory address of the target 116 (e.g., in the BAR space) that was supplied to the co-host 202 by the host system 102. The co-host 202 can then wait for the command to complete. The target 116 may detect the doorbell write, may retrieve the command from the I/O submission queues 208 of the co-host 202 by issuing an appropriate memory read, and may process the command. As illustrated in Figure 2, the I/O submission queues 208 are in the memory of the co-host 202. According to some embodiments, the I/O submission queues may instead remain in the system memory 128 or may be in another location such as, for example, a memory space of another solid state device (e.g., a PCIe device connected to the same PCI Express fabric).
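As a minimal sketch of the submission path just described, the following C fragment copies a prebuilt 64-byte NVMe submission entry into the next slot of a queue backed by co-host BAR memory and then rings the target's doorbell with a single peer-to-peer MMIO write. The 64-byte submission entry size is standard NVMe; the wrapper struct, function names, and the assumption that both BAR regions are already mapped as flat pointers are illustrative only.

```c
#include <stdint.h>
#include <string.h>

#define SQE_SIZE 64  /* NVMe submission queue entries are 64 bytes */

/* Illustrative co-host submission queue state. `slots` points at queue
 * memory inside the co-host's own BAR space; `sq_doorbell` points at the
 * target's submission queue tail doorbell register. */
struct cohost_sq {
    volatile uint8_t  *slots;        /* SQ memory in co-host BAR space */
    volatile uint32_t *sq_doorbell;  /* mapped tail doorbell in target BAR space */
    uint16_t           depth;        /* number of entries in the queue */
    uint16_t           tail;         /* next free slot index */
};

static void cohost_submit(struct cohost_sq *sq, const void *sqe /* 64 bytes */)
{
    /* Place the command where the target will later fetch it with a
     * memory read of co-host BAR memory. */
    memcpy((void *)(sq->slots + (size_t)sq->tail * SQE_SIZE), sqe, SQE_SIZE);
    sq->tail = (uint16_t)((sq->tail + 1) % sq->depth);
    /* Peer-to-peer MMIO write: the PCIe switch routes this doorbell
     * update directly to the target, bypassing the host. */
    *sq->sq_doorbell = sq->tail;
}
```

The doorbell write is the only signaling needed; the target then pulls the command from co-host memory with its own memory read, exactly as it would from host memory in the conventional flow.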
The location of queues or other data structures such as data buffers may depend on one or more factors such as, but not limited to, the amount of memory available on the co-host, the amount of memory available on the host device, the level of use of the host device, the level of use of the co-host, the amount of available bandwidth between a Peripheral Component Interconnect Express (PCIe) switch and the host device, and the amount of available bandwidth between the PCIe switch and the co-host. In embodiments in which a queue or other data structure remains in a memory associated with the host (e.g., system memory 128), performance gains can still be achieved by delegating commands to a co-host and reducing host CPU utilization.

[0055] Upon completing a command, a target (e.g., target 116) issues a memory write to fill in a completion entry in an I/O completion queue (e.g., the I/O completion queues 210). The target can then issue a memory write to signal the completion. For targets that comply with MSI-X (Message Signaled Interrupts eXtended), the target can write to a configured MSI-X interrupt address in co-host 202 memory. Since the interrupt address is also within the co-host's BAR space, the memory write may be routed directly to the co-host by the PCI Express switch 108 rather than to the host system 102. If a target does not support the MSI-X interrupt mechanism, the host system 102 could be used to relay a target's conventional interrupts (polled, INTx, or MSI) to the co-host 202 in an appropriate way. This may reintroduce contention on the host CPU 104 as a performance limit, but may still help increase bandwidth through peer-to-peer data transfers. During the configuration process, the host system 102 may be notified that a target does not support MSI-X interrupts and may be prepared to arbitrate the available interrupt mechanism on behalf of the co-host 202. The co-host 202 may be notified of the arbitration, but this is not necessary. When the host system 102 relays interrupts for the co-host 202, the host system 102 can decode an interrupt according to the ordinary NVMe mechanism. From the decoded information, the host system 102 may determine that the interrupt was sent on behalf of the delegated queue owned by the co-host 202. In some embodiments, because interrupts may be shared between multiple queues, causing ambiguity in decoding, the host system 102 may make the conservative decision to notify the co-host 202 even if there is not yet any work for the co-host to do. This is a compatibility option, and other configurations can be used in performance-oriented scenarios. Once notified, the co-host 202 knows that the completion entry is available and can read it from the completion queue at its leisure.

To further reduce dependency on external devices, a co-host may use additional allocated memory space (e.g., BAR space) for storing data buffers (e.g., data buffers 206), which may contain command data, scatter-gather lists, or physical region pages (PRPs) for processing a command. Scatter-gather lists and physical region pages (PRPs) can describe the memory locations of additional data buffers (which may or may not be in the same memory region as the scatter-gather lists or PRPs). In these embodiments, a target may issue memory reads and writes that are routed directly to the co-host 202, and not to the host system 102.
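A matching sketch of the completion path described above: after the target has written 16-byte completion entries into the co-host's completion queue and raised the MSI-X interrupt routed to the co-host, the co-host drains any new entries and updates the target's completion queue head doorbell. The 16-byte entry layout and the phase-tag convention are standard NVMe; the wrapper struct and function names are assumptions for illustration.

```c
#include <stdint.h>

/* Standard NVMe completion queue entry (16 bytes). The phase tag in
 * bit 0 of the status field flips each time the queue wraps, letting
 * the consumer detect newly written entries. */
struct nvme_cqe {
    uint32_t dw0;       /* command-specific result */
    uint32_t dw1;       /* reserved */
    uint16_t sq_head;   /* current submission queue head on the target */
    uint16_t sq_id;     /* submission queue the command came from */
    uint16_t cid;       /* command identifier */
    uint16_t status;    /* bit 0 = phase tag, bits 1..15 = status */
};

struct cohost_cq {
    volatile struct nvme_cqe *slots;       /* CQ memory in co-host BAR space */
    volatile uint32_t        *cq_doorbell; /* head doorbell in target BAR space */
    uint16_t depth;
    uint16_t head;
    uint8_t  phase;                        /* expected phase tag, starts at 1 */
};

/* Drain all completions currently visible; returns the number consumed.
 * Typically called after the MSI-X interrupt lands in co-host memory. */
static int cohost_reap(struct cohost_cq *cq)
{
    int n = 0;
    while ((cq->slots[cq->head].status & 1) == cq->phase) {
        /* ... match cq->slots[cq->head].cid to an outstanding command ... */
        if (++cq->head == cq->depth) {  /* wrap: expected phase flips */
            cq->head = 0;
            cq->phase ^= 1;
        }
        n++;
    }
    /* Peer-to-peer write tells the target which entries were consumed. */
    *cq->cq_doorbell = cq->head;
    return n;
}
```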
Applying these techniques may allow PCI Express devices or storage subsystems to achieve significant performance gains. These embodiments can be leveraged for many applications beyond simple storage and data retrieval, for example accelerating high-performance computing platforms or enabling real-time capture of high-bandwidth streaming media. Embodiments may use the NVMe protocol in unmodified form (that is, a system may share off-the-shelf NVMe controllers between the host and the co-host).

Figure 3 shows an exemplary module for peer-to-peer PCIe storage transfers, according to an embodiment of the present invention. As illustrated in Figure 3, the peer-to-peer transfer module 310 may contain the queue creation module 312, the queue management module 314, and the command processing module 316. The queue creation module 312 may, in some embodiments, reside on a host or on a storage medium associated with a host and may contain logic for creating and deleting queues. This may include identifying and allocating memory space (e.g., in host-associated system memory, in the BAR space of a co-host, in another memory space associated with a co-host, or in a memory space associated with a solid state device such as a PCIe device communicatively coupled to a co-host). The queue management module 314 may, in some embodiments, reside on a co-host or on a storage medium associated with a co-host and may contain instructions for processing commands. The queue management module 314 may also store one or more host-provided memory locations associated with queues, doorbells, interrupts, data buffers, or other data structures. The command processing module 316 may, in some embodiments, reside on a co-host or on a storage medium associated with a co-host and may contain instructions for generating commands.

Figure 4 shows a flowchart illustrating peer-to-peer PCIe storage transfers, according to an embodiment of the present invention. Process 400 is, however, given only as an example. Process 400 may be altered, for example by adding, modifying, removing, or rearranging steps. In step 402, the process can begin. In step 404, it can be determined whether queue storage should be on a co-host or at another location. In some embodiments, queues may be allocated on a separate PCIe device. The location of queues or other data structures such as data buffers may depend on one or more factors such as, but not limited to, the amount of memory available on the co-host, the amount of memory available on the host device, the level of use of the host device, the level of use of the co-host, the amount of available bandwidth between a Peripheral Component Interconnect Express (PCIe) switch and the host device, and the amount of available bandwidth between the PCIe switch and the co-host. In embodiments in which a queue or other data structure remains in a memory associated with the host (e.g., system memory 128), performance gains can still be achieved by delegating commands to a co-host and reducing host CPU utilization. If a queue is to be created on a co-host, the process can proceed to step 410. If a queue is to be created on a host, the process can proceed to step 406. At step 406, queues (e.g., I/O submission queues and I/O completion queues) may be created in system memory associated with a host (e.g., the system memory 128 of Figure 1).
In some embodiments, a host may create the queues and may send the details of the memory allocation to a co-host. In one or more embodiments, a co-host determines the memory allocation information and sends the memory allocation information to a host. In step 408, a storage command may be submitted by a co-host to a host I/O submission queue. This submission may not reduce the bandwidth choke points that appear at a PCI Express switch, but may still reduce host CPU utilization. In step 410, queues (e.g., I/O submission queues and I/O completion queues) may be created in memory associated with a co-host (e.g., BAR memory). In some embodiments, a host may create the queues and may send the details of the memory allocation to a co-host. In one or more embodiments, a co-host determines the memory allocation information and sends the memory allocation information to a host. In step 412, a storage command may be submitted by a co-host to a co-host I/O submission queue. This submission can occur without any host involvement and can reduce both the traffic passing through a PCI Express switch and host CPU utilization. In step 414, a doorbell write may be issued by a co-host to an allocated memory space of a target device. The allocated memory space of a doorbell on a target device may have been provided in advance to the co-host by the host. In step 416, it can be determined whether the queues are on a host or a co-host.
In some embodiments, the queues may be allocated on a separate PCIe device. If the queues are located on a host, the method 400 may proceed to step 418. If the queues are located on a co-host, the method 400 may proceed to step 420. In step 418, a command fetch (a memory read) may be issued by a target device to a PCI Express switch and then forwarded to a host. In step 420, a command fetch may be issued by a target device to a PCI Express switch and then sent to a co-host. In step 424, it may be determined whether physical region pages (PRPs) or scatter-gather lists exist in a data buffer located on a host or in a data buffer located in the memory of a co-host. If PRPs or scatter-gather lists exist in a data buffer located on a host, the method may proceed to step 428. If PRPs or scatter-gather lists exist in a data buffer located on a co-host, the method may proceed to step 426. In step 426, data may be transferred between a co-host and a target device by means of a data buffer of the co-host. In step 428, data may be transferred between a co-host and a target device by means of a data buffer of the host. In step 430, the method 400 may end.

[0078] Figure 5 shows a flowchart illustrating peer-to-peer PCIe storage transfers, according to an embodiment of the present invention. Process 500 is, however, given only as an example. Process 500 may be altered, for example by adding, modifying, removing, or rearranging steps. In step 502, the process 500 can begin. At step 504, it can be determined whether the queues are on a host or a co-host. In some embodiments, queues may be allocated on a separate PCIe device. If the queues are located on a host, the method 500 may proceed to step 506. If the queues are located on a co-host, the method 500 may proceed to step 508. If the queues are located on a host, in step 506 a completion indication may be written to a host I/O completion queue. If the queues are located on a co-host, in step 508 a completion indication may be written to a co-host I/O completion queue. In step 510, it can be determined whether a target device supports MSI-X interrupts. If a target device supports MSI-X interrupts, the method 500 may proceed to step 514. If a target device does not support MSI-X interrupts, the method 500 may proceed to step 512. In step 512, a non-MSI-X interrupt (e.g., polled, INTx, or MSI) may be sent by a target device to a PCI Express switch and then forwarded to a host. In step 516, the host can relay the interrupt to the co-host. In step 514, a target supporting MSI-X interrupts can send an interrupt, via a PCI Express switch, directly to a co-host. Because the interrupt address is within the co-host's BAR space, the memory write is routed directly to the co-host by the PCI Express switch, rather than to the host. In step 518, the completion may be retrieved by the co-host from the I/O completion queue. In step 520, the co-host can update the doorbell on the target. In step 522, the process 500 may end.

Figure 6 shows a table of exemplary parameters for communicating device capabilities between a host and a co-host, according to an embodiment of the present invention. As shown in Figure 6, one or more parameters may be communicated between a co-host and a host to negotiate queue creation. The queue creation parameters may include, for example, parameters that define a size for the queue entries or a maximum size for data transfers. The parameters may also include information regarding a maximum number of queue entries, the size of host system memory pages, or other configuration information.
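One way to picture the Figure 6 negotiation is as an exchange of capability blocks followed by a simple reconciliation. In the C sketch below, the field set loosely mirrors the kinds of limits NVMe controllers expose (entry sizes, maximum data transfer size, maximum queue entries, memory page size); every struct, field, and function name is an assumption rather than the patent's actual parameter table.

```c
#include <stdint.h>

/* Hypothetical capability block exchanged between host and co-host
 * before delegated queues are created (cf. Figure 6). Names are
 * illustrative only. */
struct cohost_caps {
    uint8_t  sqes_log2;         /* log2 of submission entry size (6 => 64 B)  */
    uint8_t  cqes_log2;         /* log2 of completion entry size (4 => 16 B)  */
    uint8_t  mdts_log2_pages;   /* max data transfer size, log2 of pages; 0 = unlimited */
    uint16_t max_queue_entries; /* maximum entries per delegated queue */
    uint32_t host_page_size;    /* host system memory page size in bytes */
};

/* Pick a queue depth both sides can support. */
static uint16_t negotiate_depth(const struct cohost_caps *host,
                                const struct cohost_caps *cohost,
                                uint16_t requested)
{
    uint16_t limit = host->max_queue_entries < cohost->max_queue_entries
                   ? host->max_queue_entries
                   : cohost->max_queue_entries;
    return requested < limit ? requested : limit;
}
```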
Figure 7 shows a table of exemplary completion queue parameters, according to an embodiment of the present invention. As illustrated in Figure 7, one or more delegated completion queue parameters may be communicated (e.g., from a host to a co-host) to indicate, for example, the completion queue size, the memory address, the interrupt address, and other queue metadata.

Figure 8 shows a table of exemplary submission queue parameters, according to an embodiment of the present invention. As illustrated in Figure 8, one or more submission queue parameters may be provided, such as, for example, the memory address of a submission queue, the queue size, the queue priority, and other queue metadata.

Figure 9 shows a table of exemplary completion queue creation parameters, according to an embodiment of the present invention. After negotiation of the queue creation parameters between the host and the co-host, standard NVMe commands may be sent by a host (e.g., the owner of the target's administrative queues) to a target to create one or more delegated queues on behalf of the co-host. Exemplary command parameters are shown in Figure 9.

[0090] Figure 10 shows a table of exemplary submission queue creation parameters, according to an embodiment of the present invention. After negotiation of the queue creation parameters between the host and the co-host, standard NVMe commands may be sent by a host (e.g., the owner of the target's administrative queues) to a target to create one or more delegated queues on behalf of the co-host. Exemplary command parameters are shown in Figure 10.

[0091] Figure 11 shows a table of exemplary interrupt vector configuration parameters, according to an embodiment of the present invention. Figure 11 shows an example of an entry in a target's standard MSI-X table, identified by the delegated interrupt vector (IV) number in the Create I/O Completion Queue command. The target may use these interrupt parameters to route its MSI-X interrupt message, and the host may configure them as shown so that the message is routed directly to the co-host.
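To ground the Figure 11 discussion, the sketch below programs one entry of a target's MSI-X table so that the message address falls inside the co-host's BAR space. The 16-byte-per-vector MSI-X table layout (address low/high, data, vector control with a mask bit) is standard PCIe, and the idea that the host performs this configuration on the co-host's behalf follows the text above; the function and parameter names are illustrative assumptions.

```c
#include <stdint.h>

/* Standard PCIe MSI-X table entry: 16 bytes per vector. */
struct msix_entry {
    uint32_t addr_lo;    /* message address, low 32 bits  */
    uint32_t addr_hi;    /* message address, high 32 bits */
    uint32_t data;       /* message data written on interrupt */
    uint32_t vector_ctl; /* bit 0 = mask */
};

/* Program delegated interrupt vector `iv` in the target's MSI-X table so
 * that the target's completion interrupt becomes a memory write landing
 * in the co-host's BAR space, which the PCIe switch routes peer-to-peer
 * without involving the host. */
static void route_msix_to_cohost(volatile struct msix_entry *table,
                                 unsigned iv,
                                 uint64_t cohost_bar_addr,
                                 uint32_t msg_data)
{
    table[iv].vector_ctl |= 1;                       /* mask while updating */
    table[iv].addr_lo = (uint32_t)cohost_bar_addr;
    table[iv].addr_hi = (uint32_t)(cohost_bar_addr >> 32);
    table[iv].data    = msg_data;
    table[iv].vector_ctl &= ~1u;                     /* unmask */
}
```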
Other embodiments are within the scope and spirit of the invention. For example, the functionality described above may be implemented in software, hardware, firmware, hardwiring, or any combination of these. One or more computer processors operating in accordance with instructions may implement the functions associated with peer-to-peer PCIe storage transfers in accordance with the present invention as described above. In such a case, such instructions may be stored on one or more non-transitory processor readable storage media (e.g., a magnetic disk or other storage medium), which is within the scope of the present invention. In addition, the modules implementing the functions may also be physically located at different positions, including being distributed such that portions of the functions are implemented at different physical locations.

The scope of the present invention is not limited to the specific embodiments described herein. Indeed, various other embodiments of and modifications to the present invention, in addition to those described herein, will be apparent to those skilled in the art from the foregoing description and accompanying drawings. Such other embodiments and modifications are therefore intended to fall within the scope of the present invention. Furthermore, although the present invention is described herein in the context of a particular implementation in a particular environment for a particular purpose, those skilled in the art will recognize that its usefulness is not limited thereto and that the present invention may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present invention as described herein.
Claims:
Claims (4)
[0001]
CLAIMS: 1. A method for performing peer-to-peer storage transfers between Peripheral Component Interconnect Express (PCIe) devices, comprising: placing, in memory of a first PCIe device, a queue for data communicated between the first PCIe device and a target PCIe device; receiving, at the first PCIe device, queue memory allocation information transmitted by a host device communicatively coupled to the first PCIe device and the target PCIe device; and generating, by means of a computer processor of the first PCIe device, a storage command.
[0002]
2. The method of claim 1, wherein the peer-to-peer storage transfers include storage transfers to or from a target device conforming to NVMe (Non-Volatile Memory Express).
[0003]
3. The method of claim 2, wherein the queue is assigned to a Peripheral Component Interconnect Express (PCIe) memory region allocated by a PCIe enumerator during the initialization period of the first PCIe device.
[0004]
4. The method of claim 1, wherein the queue comprises an input/output (I/O) submission queue for communication of a storage command to the target PCIe device.

5. The method of claim 1, wherein the queue comprises an input/output (I/O) completion queue for receiving from the target PCIe device a storage command completion indication.

6. The method of claim 1, further comprising: placing, in the memory of the first PCIe device, a second queue for data communicated between the first PCIe device and the target PCIe device; and receiving, at the first PCIe device, queue memory allocation information transmitted by the host device for the second queue.

7. The method of claim 6, wherein the second queue comprises at least one of the following queues: an input/output (I/O) submission queue for communication of a storage command to the target PCIe device, and an input/output (I/O) completion queue for receiving from the target PCIe device a storage command completion indication.

8. The method of claim 1, further comprising: placing, in the memory of the first PCIe device, a data buffer.

9. The method of claim 1, wherein a queue is set up in the memory of the host device for at least one of the following: administrative submission, administrative completion, I/O submission, and I/O completion.

10. The method of claim 1, further comprising: placing a data buffer in the memory of the host device.

11. The method of claim 1, wherein determining a number of queues for use on the first PCIe device and a number of queues for use on the host device is based on one or more factors comprising: an amount of memory available on the first PCIe device, an amount of memory available on the host device, a level of use of the host device, a level of use of the first PCIe device, an amount of available bandwidth between a Peripheral Component Interconnect Express (PCIe) switch and the host device, and an amount of available bandwidth between the PCIe switch and the first PCIe device.

12. The method of claim 1, further comprising: placing, in memory of a third Peripheral Component Interconnect Express (PCIe) device, a second queue for data communicated between the first PCIe device and the target PCIe device; and receiving, at the first PCIe device, queue memory allocation information transmitted by the host device for the second queue.

13. The method of claim 1, comprising: initiating, by the first PCIe device, a storage transfer command by storing the generated storage transfer command in an I/O submission queue; issuing a peer-to-peer memory write to a doorbell register of the target PCIe device; receiving, at the first PCIe device, a memory read issued by the target PCIe device to retrieve the storage transfer command; and transferring data between the first PCIe device and the target PCIe device.

14. The method of claim 1, further comprising: receiving a completion indication written by the target PCIe device into an I/O completion queue; receiving an interrupt from the target PCIe device; retrieving, by the first PCIe device, the completion from the I/O completion queue; and updating a doorbell of the target PCIe device.

15. The method of claim 14, wherein the target PCIe device conforms to MSI-X (Message Signaled Interrupts eXtended), and the interrupt is sent from the target PCIe device to an address in the memory of the first PCIe device.
16. The method of claim 14, wherein the target PCIe device does not conform to MSI-X (Message Signaled Interrupts eXtended), the interrupt is sent from the target PCIe device to the host, and the host relays the interrupt to an address in the memory of the first PCIe device.

17. The method of claim 1, wherein the host device comprises at least one of the following: an enterprise server, a database server, a workstation, and a computer.

18. The method of claim 1, wherein the target PCIe device comprises at least one of: a graphics processing unit, an audio/video capture card, and a Non-Volatile Memory Express (NVMe) controller.

19. A computer program product comprising a series of instructions executable on a computer, the computer program product executing a process for performing peer-to-peer storage transfers between Peripheral Component Interconnect Express (PCIe) devices, said program product comprising, when the program is run on a computer: computer readable program means for placing, in the memory of a first PCIe device, a queue for data communicated between the first PCIe device and a target PCIe device; computer readable program means for receiving, at the first PCIe device, queue memory allocation information transmitted by a host device communicatively coupled to the first PCIe device and the target PCIe device; and computer readable program means for generating, by means of a computer processor of the first PCIe device, a storage command.

20. A system for performing peer-to-peer storage transfers between Peripheral Component Interconnect Express (PCIe) devices, the system comprising: a host device; a first Peripheral Component Interconnect Express (PCIe) device; a target Peripheral Component Interconnect Express (PCIe) device; and a Peripheral Component Interconnect Express (PCIe) switch communicatively coupling the first PCIe device, the target PCIe device, and the host; wherein the first PCIe device includes Non-Volatile Memory Express (NVMe) command submission instructions stored in memory, the instructions comprising: an address of a queue in the memory of the first PCIe device for I/O submission; and instructions for generating an NVMe command.
Similar technologies:
Publication number | Publication date | Patent title
FR3020886A1|2015-11-13|
US10079889B1|2018-09-18|Remotely accessible solid state drive
US10348830B1|2019-07-09|Virtual non-volatile memory express drive
US10585819B2|2020-03-10|SSD architecture for FPGA based acceleration
US9575657B2|2017-02-21|Dataset replica migration
US10241722B1|2019-03-26|Proactive scheduling of background operations for solid state drives
TW200846910A|2008-12-01|Hints model for optimization of storage devices connected to host and write optimization schema for storage devices
US9361257B2|2016-06-07|Mechanism for facilitating customization of multipurpose interconnect agents at computing devices
US10466906B2|2019-11-05|Accessing non-volatile memory express controller memory manager
FR3033061A1|2016-08-26|
EP3796179A1|2021-03-24|System, apparatus and method for processing remote direct memory access operations with a device-attached memory
KR20160031099A|2016-03-22|Storage device, data storage system having the same, and garbage collection method thereof
US20180219797A1|2018-08-02|Technologies for pooling accelerator over fabric
FR3017222A1|2015-08-07|APPARATUS, METHOD, AND PROGRAM FOR PROCESSING INFORMATION
US20180095878A1|2018-04-05|Memory access for exactly-once messaging
US9823857B1|2017-11-21|Systems and methods for end-to-end quality of service control in distributed systems
US10901624B1|2021-01-26|Dummy host command generation for supporting higher maximum data transfer sizes |
US20200257629A1|2020-08-13|Systems and methods for streaming storage device content
US9733846B1|2017-08-15|Integrated backup performance enhancements by creating affinity groups
US20190272241A1|2019-09-05|Novel ssd architecture for fpga based acceleration
CN111427808A|2020-07-17|System and method for managing communication between a storage device and a host unit
US11237761B2|2022-02-01|Management of multiple physical function nonvolatile memory devices
US10606776B2|2020-03-31|Adding dummy requests to a submission queue to manage processing queued requests according to priorities of the queued requests
US20210318937A1|2021-10-14|Memory device having redundant media management capabilities
FR2939533A1|2010-06-11|SHARED MEMORY ACCESS SHARED TO INPUT / OUTPUT DATA
Patent family:
Publication number | Publication date
CN105068953B|2019-05-28|
GB2528351B|2016-06-22|
CN105068953A|2015-11-18|
GB201507567D0|2015-06-17|
US9304690B2|2016-04-05|
DE102015005744A1|2015-11-12|
US20150324118A1|2015-11-12|
US20160210062A1|2016-07-21|
US9557922B2|2017-01-31|
GB2528351A|2016-01-20|
FR3020886B1|2018-10-12|
Citations:
Publication number | Filing date | Publication date | Applicant | Patent title

US5228568A|1991-08-30|1993-07-20|Shin-Etsu Handotai Co., Ltd.|Semiconductor wafer basket|
WO1997013708A1|1995-10-13|1997-04-17|Empak, Inc.|300 mm SHIPPING CONTAINER|
US6267245B1|1998-07-10|2001-07-31|Fluoroware, Inc.|Cushioned wafer container|
JP3938293B2|2001-05-30|2007-06-27|信越ポリマー株式会社|Precision substrate storage container and its holding member|
US7886298B2|2002-03-26|2011-02-08|Hewlett-Packard Development Company, L.P.|Data transfer protocol for data replication between multiple pairs of storage controllers on a san fabric|
US20060042998A1|2004-08-24|2006-03-02|Haggard Clifton C|Cushion for packing disks such as semiconductor wafers|
US7617377B2|2006-10-17|2009-11-10|International Business Machines Corporation|Splitting endpoint address translation cache management responsibilities between a device driver and device driver services|
JP5253410B2|2007-11-09|2013-07-31|信越ポリマー株式会社|Retainer and substrate storage container|
US20130086311A1|2007-12-10|2013-04-04|Ming Huang|METHOD OF DIRECT CONNECTING AHCI OR NVMe BASED SSD SYSTEM TO COMPUTER SYSTEM MEMORY BUS|
TWI337162B|2008-07-31|2011-02-11|Gudeng Prec Industral Co Ltd|A wafer container with constraints|
JP5483351B2|2010-06-17|2014-05-07|信越ポリマー株式会社|Substrate storage container|
US20120110259A1|2010-10-27|2012-05-03|Enmotus Inc.|Tiered data storage system with data management and method of operation thereof|
US9141571B2|2010-12-28|2015-09-22|Avago Technologies General Ip Pte. Ltd.|PCI express switch with logical device capability|
EP2761482B1|2011-09-30|2016-11-30|Intel Corporation|Direct i/o access for system co-processors|
US8966172B2|2011-11-15|2015-02-24|Pavilion Data Systems, Inc.|Processor agnostic data storage in a PCIE based shared storage enviroment|
US20130135816A1|2011-11-17|2013-05-30|Futurewei Technologies, Inc.|Method and Apparatus for Scalable Low Latency Solid State Drive Interface|
US9652182B2|2012-01-31|2017-05-16|Pavilion Data Systems, Inc.|Shareable virtual non-volatile storage device for a server|
US8554963B1|2012-03-23|2013-10-08|DSSD, Inc.|Storage system with multicast DMA and unified address space|
WO2013180691A1|2012-05-29|2013-12-05|Intel Corporation|Peer-to-peer interrupt signaling between devices coupled via interconnects|
US9256384B2|2013-02-04|2016-02-09|Avago Technologies General Ip Pte. Ltd.|Method and system for reducing write latency in a data storage system by using a command-push model|
US9298648B2|2013-05-08|2016-03-29|Avago Technologies General Ip Pte Ltd|Method and system for I/O flow management using RAID controller with DMA capabilitiy to directly send data to PCI-E devices connected to PCI-E switch|
US9021141B2|2013-08-20|2015-04-28|Lsi Corporation|Data storage controller and method for exposing information stored in a data storage controller to a host system|
Cited by:
Publication number | Filing date | Publication date | Applicant | Patent title
US10275175B2|2014-10-06|2019-04-30|Western Digital Technologies, Inc.|System and method to provide file system functionality over a PCIe interface|
US10191691B2|2015-04-28|2019-01-29|Liqid Inc.|Front-end quality of service differentiation in storage system operations|
KR20170007099A|2015-07-10|2017-01-18|삼성전자주식회사|Method of managing input/outputqueues by non volatile memory expresscontroller|
US10817528B2|2015-12-15|2020-10-27|Futurewei Technologies, Inc.|System and method for data warehouse engine|
US10423568B2|2015-12-21|2019-09-24|Microsemi Solutions , Inc.|Apparatus and method for transferring data and commands in a memory management environment|
CN107145459B|2016-03-01|2021-05-18|华为技术有限公司|System and method for remote shared access of cascade plate and SSD|
US10698634B2|2016-05-26|2020-06-30|Hitachi, Ltd.|Computer system and data control method utilizing NVMe and storing commands including an offset address corresponding to a server in a queue|
CN112347012A|2016-06-20|2021-02-09|北京忆芯科技有限公司|SR-IOVsupporting NVMecontroller and method|
CN111352873B|2016-06-30|2021-10-08|北京忆芯科技有限公司|NVMe protocol command processing method and device|
CN107783916B|2016-08-26|2020-01-31|深圳大心电子科技有限公司|Data transmission method, storage controller and list management circuit|
US10445018B2|2016-09-09|2019-10-15|Toshiba Memory Corporation|Switch and memory device|
CN107818056B|2016-09-14|2021-09-07|华为技术有限公司|Queue management method and device|
US20180088978A1|2016-09-29|2018-03-29|Intel Corporation|Techniques for Input/Output Access to Memory or Storage by a Virtual Machine or Container|
KR20180038813A|2016-10-07|2018-04-17|삼성전자주식회사|Storage device capable of performing peer-to-peer communication and data storage system including the same|
US20180181340A1|2016-12-23|2018-06-28|Ati Technologies Ulc|Method and apparatus for direct access from non-volatile memory to local memory|
US10521389B2|2016-12-23|2019-12-31|Ati Technologies Ulc|Method and apparatus for accessing non-volatile memory as byte addressable memory|
US10289421B2|2017-02-17|2019-05-14|Dell Products, L.P.|Booting of IHS from SSD using PCIe|
CN108628775B|2017-03-22|2021-02-12|华为技术有限公司|Resource management method and device|
US11086813B1|2017-06-02|2021-08-10|Sanmina Corporation|Modular non-volatile memory express storage appliance and method therefor|
US10579568B2|2017-07-03|2020-03-03|Intel Corporation|Networked storage system with access to any attached storage device|
US10915448B2|2017-08-22|2021-02-09|Seagate Technology Llc|Storage device initiated copy back operation|
KR20190033284A|2017-09-21|2019-03-29|삼성전자주식회사|Method and system for transmitting data between storage devices over peer-to-peer connections of PCI-express|
US10430333B2|2017-09-29|2019-10-01|Intel Corporation|Storage system with interconnected solid state disks|
US10719474B2|2017-10-11|2020-07-21|Samsung Electronics Co., Ltd.|System and method for providing in-storage accelerationin data storage devices|
KR20190064083A|2017-11-30|2019-06-10|삼성전자주식회사|Storage device and electronic device including the same|
WO2019127021A1|2017-12-26|2019-07-04|华为技术有限公司|Management method and apparatus for storage device in storage system|
CN108270685A|2018-01-31|2018-07-10|深圳市国微电子有限公司|The method, apparatus and terminal of a kind of data acquisition|
US10521378B2|2018-03-09|2019-12-31|Samsung Electronics Co., Ltd.|Adaptive interface storage device with multiple storage protocols including NVME and NVME over fabrics storage devices|
WO2020000483A1|2018-06-30|2020-01-02|华为技术有限公司|Data processing method and storage system|
CN109491948B|2018-11-19|2021-10-29|郑州云海信息技术有限公司|Data processing method and device for double ports of solid state disk|
US20200201575A1|2018-12-20|2020-06-25|Marvell IsraelLtd.|Solid-state drive with initiator mode|
US10585827B1|2019-02-05|2020-03-10|Liqid Inc.|PCIe fabric enabled peer-to-peer communications|
WO2020183444A1|2019-03-14|2020-09-17|Marvell Asia Pte, Ltd.|Transferring data between solid state drivesvia a connection between the ssds|
US11237760B2|2019-12-19|2022-02-01|Western Digital Technologies, Inc.|Measuring performance metrics for data storage devices|
CN111221757B|2019-12-31|2021-05-04|杭州熠芯科技有限公司|Low-delay PCIE DMA data transmission method and controller|
US11074202B1|2020-02-26|2021-07-27|Red Hat, Inc.|Efficient management of bus bandwidth for multiple drivers|
Legal status:
2016-05-20| PLFP| Fee payment|Year of fee payment: 2 |
2017-04-13| PLFP| Fee payment|Year of fee payment: 3 |
2017-11-24| PLSC| Search report ready|Effective date: 20171124 |
2018-04-11| PLFP| Fee payment|Year of fee payment: 4 |
2019-04-10| PLFP| Fee payment|Year of fee payment: 5 |
2020-04-14| PLFP| Fee payment|Year of fee payment: 6 |
2020-04-24| TP| Transmission of property|Owner name: WESTERN DIGITAL TECHNOLOGIES, INC., US Effective date: 20200319 |
2021-04-12| PLFP| Fee payment|Year of fee payment: 7 |
Priority:
Application number | Filing date | Patent title
US14/272,214|US9304690B2|2014-05-07|2014-05-07|System and method for peer-to-peer PCIe storage transfers|
US14272214|2014-05-07|